
Runtime Optimizations for Tree-Based Machine Learning Models



Abstract

Tree-based models have proven to be an effective solution for web ranking as well as other machine learning problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, specifically using gradient-boosted regression trees for learning to rank. Although exceedingly simple conceptually, most implementations of tree-based models do not efficiently utilize modern superscalar processors. By laying out data structures in memory in a more cache-conscious fashion, removing branches from the execution flow using a technique called predication, and micro-batching predictions using a technique called vectorization, we are able to better exploit modern processor architectures. Experiments on synthetic data and on three standard learning-to-rank datasets show that our approach is significantly faster than standard implementations.
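To make the predication idea from the abstract concrete, here is a minimal, hedged sketch (not the paper's actual implementation): a complete binary tree is stored as flat arrays in a heap layout, and each comparison result is folded directly into the child index instead of taken as a branch, so traversal is pure index arithmetic. All names and the layout choice here are illustrative assumptions.

```python
def predict_predicated(features, thresholds, leaf_values, depth, x):
    """Branch-free traversal of a complete binary tree (a sketch of predication).

    Internal node i (heap layout, root at index 1) tests
    x[features[i]] > thresholds[i]; index 0 of the arrays is unused padding.
    Left child = 2*i, right child = 2*i + 1, so the boolean comparison
    (0 or 1) becomes an index offset rather than a conditional jump.
    After `depth` steps, i - 2**depth addresses the leaf-value array.
    """
    i = 1
    for _ in range(depth):
        # bool -> int: the comparison selects the child arithmetically
        i = 2 * i + (x[features[i]] > thresholds[i])
    return leaf_values[i - (1 << depth)]


# Usage: a depth-2 tree with internal nodes 1..3 and leaves 4..7.
features = [0, 0, 1, 1]            # index 0 unused
thresholds = [0.0, 5.0, 2.0, 7.0]  # index 0 unused
leaf_values = [10.0, 20.0, 30.0, 40.0]

print(predict_predicated(features, thresholds, leaf_values, 2, [3.0, 1.0]))
```

In a compiled implementation the same transformation removes data-dependent branches that superscalar processors mispredict; the Python version only illustrates the index arithmetic.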

